34 research outputs found

    Characterization of uncertainty in Bayesian estimation using sequential Monte Carlo methods

    In estimation problems, the accuracy of the estimates of the quantities of interest cannot be taken for granted. This means that estimation errors are expected, and a good estimation algorithm should be able not only to compute estimates that are optimal in some sense, but also to provide meaningful measures of the uncertainty associated with those estimates. In some situations, we might also be able to reduce estimation uncertainty through the use of feedback on observations, an approach referred to as sensor management. Characterization of estimation uncertainty, as well as sensor management, are certainly difficult tasks for general partially observed processes, which might be non-linear, non-Gaussian, and/or have dependent process and observation noises. Sequential Monte Carlo (SMC) methods, also known as particle filters, are numerical Bayesian estimators which are, in principle, able to handle highly general estimation problems. However, SMC methods are known to suffer from a phenomenon called degeneracy, or self-resolving, which greatly impairs their usefulness for certain classes of problems. One such class, which we address in the first part of this thesis, is the joint state and parameter estimation problem, where there are unknown parameters to be estimated together with the time-varying state. Some SMC variants have been proposed to counter the degeneracy phenomenon for this problem, but these state-of-the-art techniques are either non-Bayesian or introduce biases into the system model, which might not be appropriate if proper characterization of estimation uncertainty is required. For this type of scenario, we propose using the Rao-Blackwellized Marginal Particle Filter (RBMPF), a combination of two SMC algorithm variants: the Rao-Blackwellized Particle Filter (RBPF) and the Marginal Particle Filter (MPF). We derive two new versions of the RBMPF: one for models with low-dimensional parameter vectors, and another for more general models.
We apply the proposed methods to two practical problems: the target tracking problem of turn rate estimation for a constant turn maneuver, and the econometrics problem of stochastic volatility estimation. Our proposed methods are shown to be effective solutions, both in terms of estimation accuracy and statistical consistency, i.e. characterization of estimation uncertainty. Another problem where standard particle filters suffer from degeneracy, addressed in the second part of this thesis, is the joint multi-target tracking and labelling problem. In comparison with the joint state and parameter estimation problem, this problem poses an additional challenge, namely, the fact that it had not been properly mathematically formulated in previous literature. Using Finite Set Statistics (FISST), we provide a sound theoretical formulation for the problem, and in order to actually solve it, we propose a novel Bayesian algorithm, the Labelling Uncertainty-Aware Particle Filter (LUA-PF), essentially a combination of the RBMPF and Multi-target Sequential Monte Carlo (M-SMC) filter techniques. We show that the new algorithm achieves significant improvements in both finding the correct track labelling and providing a meaningful measure of labelling uncertainty. In the last part of this thesis, we address the sensor management problem. Although we apply particle filters to the problem, they are not the main focus of this part of the work. Instead, we concentrate on a more fundamental question, namely, which sensor management criterion should be used in order to obtain the best results in terms of information gain and/or reduction of uncertainty. In order to answer this question, we perform an in-depth theoretical and empirical analysis of two popular sensor management criteria based on information theory: the Kullback-Leibler and Rényi divergences.
On the basis of this analysis, we are able to either confirm or reject some previous arguments used as theoretical justification for these two criteria.
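The degeneracy phenomenon described above can be observed even in the simplest SMC setting. Below is a minimal sketch (a plain bootstrap particle filter, not the thesis's RBMPF) for a scalar random-walk model; the model, noise levels, and resampling threshold are illustrative assumptions. The effective sample size (ESS) collapsing between resampling steps is the symptom of degeneracy:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500, q=1.0, r=1.0):
    """Minimal bootstrap particle filter for the scalar model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns per-step MMSE estimates and effective sample sizes."""
    x = rng.normal(0.0, 1.0, n_particles)        # initial particle cloud
    w = np.full(n_particles, 1.0 / n_particles)  # uniform weights
    estimates, ess_history = [], []
    for y in ys:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate particles
        w = w * np.exp(-0.5 * (y - x) ** 2 / r)           # reweight by likelihood
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)   # low ESS = weight degeneracy
        ess_history.append(ess)
        estimates.append(np.sum(w * x))
        if ess < n_particles / 2:    # resample when degeneracy sets in
            idx = rng.choice(n_particles, n_particles, p=w)
            x, w = x[idx], np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates), np.array(ess_history)

# simulate a short trajectory and run the filter
true_x = np.cumsum(rng.normal(0.0, 1.0, 50))
ys = true_x + rng.normal(0.0, 1.0, 50)
est, ess = bootstrap_pf(ys)
```

Resampling restores particle diversity for the time-varying state, but, as the thesis notes, it cannot help with static unknown parameters: a parameter dimension appended to the state receives no process noise, so its particle support only shrinks over time.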

    A theoretical analysis of Bayes-optimal multi-target tracking and labelling

    In multi-target tracking (MTT), we are often interested not only in finding the positions of the multiple objects, but also in allowing individual objects to be uniquely identified over time, by placing a label on each track. While there are many MTT algorithms that produce uniquely identified tracks as output, most of them make use of certain heuristics and/or unrealistic assumptions that make the global result suboptimal in the Bayesian sense. An innovative way of performing MTT is so-called joint multi-target tracking, where the raw output of the algorithm, rather than being the collection of output tracks itself, is a multi-target density calculated by approximating the Bayesian recursion that considers the entire system to have a single multidimensional state. The raw output, i.e. the calculated multi-target density, is thereafter processed to obtain output tracks to be displayed to the operator. This elegant approach, at least in theory, would allow us to precisely represent multi-target statistics. However, most joint MTT methods in the literature handle the problem of track labelling in an ad-hoc, i.e. non-Bayesian, manner. A number of methods, however, have suggested that the multi-target density, calculated using the Bayesian recursion, should contain information not only about the locations of the individual objects but also about their identities. This approach, which we refer to as joint MTTL (joint multi-target tracking and labelling), looks intuitively advantageous. It would allow us, at least in theory, to obtain an output consisting of labelled tracks that is optimal in the Bayesian sense. Moreover, it would allow us to have statistical information about the assigned labels; for instance, we would know the probability that a track swap has occurred after targets have come close to each other (or, in simpler words, we would know how much we can believe that a target is what the display says it is).
    However, the methods proposed in the still emerging joint MTTL literature do not address some problems that may considerably reduce the usefulness of the approach. These problems include: track coalescence after targets move closely to each other, gradual loss of ambiguity information when particle filters or multiple-hypothesis approaches are used, and dealing with an unknown/varying number of targets. As we are going to see, each of the previously proposed methods handles only a subset of these problems. Moreover, while obtaining a Bayes-optimal output of labelled tracks is one of the main motivations for joint MTTL, how such output should be obtained is a matter of debate. This work tackles the joint MTTL problem together with a companion memorandum. In this work, we look at the problem from a theoretical perspective, i.e. we aim to provide an accurate and algorithm-independent picture of the aforementioned problems. An algorithm that actually handles these problems is proposed in the companion memorandum. As one of the contributions of this memorandum, we clearly characterize the so-called "mixed labelling" phenomenon that leads to track coalescence and other problems, and we verify that, contrary to what was implied in previous literature, it is a physical phenomenon inherent to the MTTL problem rather than specific to a particular approach. We also show how mixed labelling leads to nontrivial issues in practical implementations of joint MTTL. As another contribution of this memorandum, we propose a conceptual, algorithm-independent track extraction method for joint MTTL estimators that gives an output with a clear physical interpretation for the user.
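Mixed labelling and the resulting track coalescence can be illustrated with a small numerical sketch. The scenario below is a hypothetical toy example (not taken from the memorandum): a labelled two-target posterior in which, after a close approach, the label assignment is ambiguous with probability one half. The labelled marginal of each track then becomes bimodal, so the MMSE point estimate coalesces near the midpoint between the targets, while the labelling probability carries the residual ambiguity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Labelled two-target posterior after a close approach, represented by
# particles over ordered pairs (position of track A, position of track B).
# With probability p_swap the label assignment is exchanged.
n = 4000
p_swap = 0.5                            # complete mixed labelling
swapped = rng.random(n) < p_swap
pos_left = rng.normal(-3.0, 0.3, n)     # one target ended up on the left
pos_right = rng.normal(3.0, 0.3, n)     # the other ended up on the right
track_a = np.where(swapped, pos_right, pos_left)
track_b = np.where(swapped, pos_left, pos_right)

# The labelled marginal of track A is bimodal (modes near -3 and +3), so
# its MMSE estimate coalesces near 0, far from either physical target.
mmse_a = track_a.mean()

# Labelling uncertainty: probability that track A is the left-hand target.
p_a_left = float(np.mean(track_a < 0.0))
```

Note that the ambiguity is a property of the labelled posterior itself, independent of the algorithm used to compute it, which is the algorithm-independence point made above: any consistent joint MTTL estimator must represent and report this uncertainty rather than average it away.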

    Particle filter approximations for general open loop and open loop feedback sensor management

    Sensor management is a stochastic control problem where the control mechanism is directed at the generation of observations. Typically, sensor management attempts to optimize a certain statistic derived from the posterior distribution of the state, such as covariance or entropy. However, these statistics often depend on future measurements which are not available at the moment the control decision is taken, making it necessary to consider their expectation over the entire measurement space. Though the idea of computing such expectations using a particle filter is not new, so far it has been applied only to specific sensor management problems and criteria. In this memorandum, for a considerably broad class of problems, we explicitly show how particle filters can be used to approximate general sensor management criteria in the open loop and open loop feedback cases. As examples, we apply these approximations to selected sensor management criteria. As an additional contribution of this memorandum, we show that every performance metric can be used to define a corresponding estimate and a corresponding task-driven sensor management criterion, and that both of them can be approximated using particle filters. This is used to propose an approximate sensor management scheme based on the OSPA metric for multi-target tracking, which is included among our examples.
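The expectation over future measurements described above can be sketched with particles: for each candidate sensor action, draw hypothetical measurements from the particle predictive distribution, compute the posterior statistic for each, and average. The sketch below is a minimal illustration using expected posterior variance as a simple task-driven criterion; the scalar model and the two candidate sensor noise levels are assumptions for illustration, not the memorandum's examples (which use information-theoretic and OSPA-based criteria):

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_posterior_variance(particles, noise_std, n_sim=200):
    """Monte Carlo estimate of the expected posterior variance of the state
    under the measurement model y = x + N(0, noise_std^2), with the
    expectation taken over measurements drawn from the particle
    predictive distribution."""
    n = len(particles)
    total = 0.0
    for _ in range(n_sim):
        # draw a hypothetical future measurement from the predictive distribution
        x_star = particles[rng.integers(n)]
        y = x_star + rng.normal(0.0, noise_std)
        # posterior weights given that hypothetical measurement
        w = np.exp(-0.5 * (y - particles) ** 2 / noise_std ** 2)
        w /= w.sum()
        mean = np.sum(w * particles)
        total += np.sum(w * (particles - mean) ** 2)
    return total / n_sim

# prior belief about the state, represented by particles
particles = rng.normal(0.0, 2.0, 1000)
# candidate sensors: an accurate one (std 0.5) vs. a noisy one (std 3.0)
v_accurate = expected_posterior_variance(particles, 0.5)
v_noisy = expected_posterior_variance(particles, 3.0)
# open loop decision rule: pick the sensor minimizing expected posterior variance
```

The same two-loop structure (outer Monte Carlo over hypothetical measurements, inner reweighting of the particle set) carries over to other criteria: replacing the variance computation with an estimate of the Kullback-Leibler or Rényi divergence between prior and posterior weights yields the information-driven criteria analyzed in the first abstract above.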

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Search for single production of vector-like quarks decaying into Wb in pp collisions at √s = 8 TeV with the ATLAS detector


    Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at √s = 8 TeV with the ATLAS detector


    ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider


    Measurements of top-quark pair differential cross-sections in the eμ channel in pp collisions at √s = 13 TeV using the ATLAS detector


    Measurement of the W boson polarisation in tt̄ events from pp collisions at √s = 8 TeV in the lepton + jets channel with ATLAS
